CONDITION NUMBER REGULARIZED COVARIANCE ESTIMATION By Joong-Ho Won
Abstract
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
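The core idea, bounding the condition number of the estimate, can be illustrated by clipping the sample eigenvalues into an interval [tau, kappa_max * tau]. The sketch below is not the paper's exact algorithm: it picks the truncation point tau by a naive grid search over the normal likelihood, whereas the paper derives the optimal truncation analytically. The function name `cond_reg_cov` and the grid resolution are choices made for this sketch.

```python
import numpy as np

def cond_reg_cov(X, kappa_max=10.0):
    """Condition-number-regularized covariance estimate (illustrative sketch).

    Clips the sample eigenvalues into [tau, kappa_max * tau] so that the
    condition number of the estimate is at most kappa_max.  The truncation
    point tau is chosen here by a crude 1-D grid search over the normal
    likelihood; the paper obtains it in closed form instead.
    """
    S = np.cov(X, rowvar=False)          # p x p sample covariance
    lam, V = np.linalg.eigh(S)           # sample eigenvalues, ascending

    def neg_loglik(d):
        # Normal negative log-likelihood terms that depend on eigenvalues
        return np.sum(np.log(d) + lam / d)

    best_tau, best_obj = None, np.inf
    for tau in np.linspace(max(lam.min(), 1e-8), max(lam.max(), 1e-6), 200):
        obj = neg_loglik(np.clip(lam, tau, kappa_max * tau))
        if obj < best_obj:
            best_obj, best_tau = obj, tau

    lam_reg = np.clip(lam, best_tau, kappa_max * best_tau)
    return V @ np.diag(lam_reg) @ V.T
```

Note that the estimate is invertible even when n < p and the sample covariance is singular: the zero sample eigenvalues are lifted up to tau, while the largest eigenvalues are shrunk down, which is the Steinian-shrinkage flavor mentioned in the abstract.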
Similar works
MAXIMUM LIKELIHOOD COVARIANCE ESTIMATION WITH A CONDITION NUMBER CONSTRAINT
High dimensional covariance estimation is known to be a difficult problem, has many applications and is of current interest to the larger statistical community. We consider the problem of estimating the covariance matrix of a multivariate normal distribution in the “large p small n” setting. Several approaches to high dimensional covariance estimation have been proposed in the literature. In ma...
ITERATIVE THRESHOLDING ALGORITHM FOR SPARSE INVERSE COVARIANCE ESTIMATION
The ℓ1-regularized maximum likelihood estimation problem has recently become a topic of great interest within the machine learning, statistics, and optimization communities as a method for producing sparse inverse covariance estimators. In this paper, a proximal gradient method (G-ISTA) for performing ℓ1-regularized covariance matrix estimation is presented. Although numerous algorithms have be...
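The ℓ1-regularized problem this snippet refers to minimizes tr(S Θ) − log det Θ + ρ‖Θ‖₁ over positive-definite Θ. A minimal sketch of the proximal-gradient idea is below; note this is plain ISTA with a fixed step and crude backtracking, not G-ISTA's adaptive step-size and duality-gap machinery, and the function names are made up for this sketch.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def l1_inv_cov_ista(S, rho=0.1, step=1e-2, n_iter=500):
    """Proximal-gradient sketch for l1-regularized inverse covariance.

    Minimizes tr(S @ Theta) - log det(Theta) + rho * ||Theta||_1 over
    positive-definite Theta via gradient steps on the smooth part
    followed by soft-thresholding.
    """
    Theta = np.diag(1.0 / (np.diag(S) + rho))    # common PD initializer
    for _ in range(n_iter):
        grad = S - np.linalg.inv(Theta)          # gradient of smooth part
        cand = soft_threshold(Theta - step * grad, step * rho)
        cand = (cand + cand.T) / 2.0             # symmetrize
        if np.linalg.eigvalsh(cand).min() > 0:   # keep step only if still PD
            Theta = cand
        else:
            step /= 2.0                          # crude backtracking
    return Theta
```

The soft-thresholding step is what produces exact zeros in the off-diagonal of the estimated precision matrix, i.e. the sparsity that the ℓ1 penalty is designed to induce.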
Regularized Autoregressive Multiple Frequency Estimation
The paper addresses the problem of tracking multiple frequencies using Regularized Autoregressive (RAR) approximation. The RAR procedure decreases the approximation bias relative to other AR-based frequency detection methods, while still providing competitive variance of sample estimates. We show that the RAR estimates of multiple periodicities are consistent in probabilit...
An adaptive method for combined covariance estimation and classification
In this paper a family of adaptive covariance estimators is proposed to mitigate the problem of limited training samples for application to hyperspectral data analysis in quadratic maximum likelihood classification. These estimators are the combination of adaptive classification procedures and regularized covariance estimators. In these proposed estimators, the semi-labeled samples (whose label...